
feat(sdk-trace): implement span processor metrics #6504

Open

anuraaga wants to merge 12 commits into open-telemetry:main from anuraaga:span-processor-metrics

Conversation

@anuraaga
Contributor

Which problem is this PR solving?

I am helping to implement SDK internal metrics

https://opentelemetry.io/docs/specs/semconv/otel/sdk-metrics/

This PR implements the span processor metrics.

Implementation in Java - https://github.com/open-telemetry/opentelemetry-java/pull/7895/changes#diff-57fb1f394e77f4d9b90d05aa09755a0b75d3c55bed16176c18f87850788dd664

After this PR is merged, I will send a very similar one for log processor. And then SDK metrics should be complete.

/cc @trentm

Short description of the changes

Implement metrics for span processors per semconv
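
For a concrete picture, here is an illustrative sketch (not the PR's actual implementation) of the instruments involved, using the public @opentelemetry/api metrics interface and the semconv metric names that appear in the manual test output later in this thread; getQueueSize is a stand-in helper:

import { metrics, type Attributes } from '@opentelemetry/api';

const meter = metrics.getMeter('@opentelemetry/sdk-trace');

// Attributes identifying the processor instance, per semconv.
const attrs: Attributes = {
  'otel.component.type': 'batching_span_processor',
  'otel.component.name': 'batching_span_processor/0',
};

// Stand-in for however the processor exposes its current queue length.
const getQueueSize = (): number => 0;

// Monotonic count of spans the processor has finished processing.
const processed = meter.createCounter('otel.sdk.processor.span.processed', {
  unit: '{span}',
});
processed.add(1, attrs);

// The current queue size is reported lazily through an observable callback.
const queueSize = meter.createObservableUpDownCounter(
  'otel.sdk.processor.span.queue.size',
  { unit: '{span}' }
);
queueSize.addCallback(result => {
  result.observe(getQueueSize(), attrs);
});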

Type of change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • Unit tests

Checklist:

  • Followed the style guidelines of this project
  • Unit tests have been added
  • Documentation has been updated

@anuraaga anuraaga requested a review from a team as a code owner March 19, 2026 06:18
@codecov

codecov Bot commented Mar 19, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 95.78%. Comparing base (8ee2a8b) to head (1c6f8e1).
⚠️ Report is 9 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6504      +/-   ##
==========================================
+ Coverage   95.76%   95.78%   +0.02%     
==========================================
  Files         375      376       +1     
  Lines       12739    12801      +62     
  Branches     3013     3026      +13     
==========================================
+ Hits        12200    12262      +62     
  Misses        539      539              

Files with missing lines                               Coverage   Δ
...imental/packages/opentelemetry-sdk-node/src/sdk.ts  96.08%     <100.00%> (+0.15%) ⬆️
...ental/packages/opentelemetry-sdk-node/src/utils.ts  96.26%     <100.00%> (+<0.01%) ⬆️
...dk-trace-base/src/export/BatchSpanProcessorBase.ts  95.52%     <100.00%> (+0.36%) ⬆️
...y-sdk-trace-base/src/export/SimpleSpanProcessor.ts  95.23%     <100.00%> (+1.12%) ⬆️
...-sdk-trace-base/src/export/SpanProcessorMetrics.ts  100.00%    <100.00%> (ø)
...ckages/opentelemetry-sdk-trace-base/src/semconv.ts  100.00%    <100.00%> (ø)

const meter = config?.meterProvider
  ? config.meterProvider.getMeter('@opentelemetry/sdk-trace')
  : createNoopMeter();
this._metrics = new SpanProcessorMetrics(
Contributor

In the SpanProcessorMetrics constructor, a callback is added to queueSize using meter.createObservableUpDownCounter().addCallback(). Looks like this callback is never removed. Since the callback captures this (to call getQueueSize), it prevents the SpanProcessor and its metrics from being garbage collected even after shutdown.

Contributor Author

Oh thanks - I can add explicit cleanup. But just curious, shouldn't the GC still handle cycles like that, or is there something else at play?

Contributor

It's not a cycle issue. The problem is that the Meter lives longer than the span processor and holds onto the callback. Since the callback references the processor, the processor can never be garbage collected, even after shutdown(). Removing the callback in shutdown() should fix it.
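
A minimal sketch of that fix (the class shape and names here, like _queueSizeCallback and getQueueSize, are assumptions for illustration, not necessarily the PR's exact code): keep a reference to the registered callback so it can be unregistered when the processor shuts down.

import {
  type Meter,
  type ObservableResult,
  type ObservableUpDownCounter,
} from '@opentelemetry/api';

class SpanProcessorMetrics {
  private readonly _queueSize: ObservableUpDownCounter;
  private readonly _queueSizeCallback: (result: ObservableResult) => void;

  constructor(meter: Meter, private readonly getQueueSize: () => number) {
    this._queueSize = meter.createObservableUpDownCounter(
      'otel.sdk.processor.span.queue.size',
      { unit: '{span}' }
    );
    // This closure captures `this`, so the Meter keeps the metrics object
    // (and the processor it references) reachable for as long as the
    // callback stays registered.
    this._queueSizeCallback = result => result.observe(this.getQueueSize());
    this._queueSize.addCallback(this._queueSizeCallback);
  }

  // Called from the processor's shutdown(): unregister the callback so the
  // long-lived Meter no longer holds a reference to this object.
  shutdown(): void {
    this._queueSize.removeCallback(this._queueSizeCallback);
  }
}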

Contributor

@trentm trentm left a comment

LGTM, modulo the removeCallback thing that Jackson brought up.

My manual test (in "experimental/packages/opentelemetry-sdk-node/"):

play.js

const {trace} = require('@opentelemetry/api');
const {NodeSDK} = require('./build/src/index.js');

const sdk = new NodeSDK();
sdk.start();
process.once('beforeExit', async () => {
  return sdk.shutdown();
});

const tracer = trace.getTracer('manual');
tracer.startActiveSpan('myspan', span => {
  span.end();
})

// Stay alive.
setInterval(() => {
  console.log('.')
}, 10000);

Run a mock/dev collector that shows a summary of received OTLP:

npx @elastic/mockotlpserver -o summary,spacer

Run the play script:

OTEL_NODE_EXPERIMENTAL_SDK_METRICS=true \
  OTEL_METRIC_EXPORT_INTERVAL=5000 \
  OTEL_METRIC_EXPORT_TIMEOUT=5000 \
  node play.js

Wait a bit, and the telemetry summary is:

------ metrics ------
sum "otel.sdk.span.started" (unit={span}, aggTemp=cumulative): 1 { 'otel.span.parent.origin': 'none', 'otel.span.sampling_result': 'RECORD_AND_SAMPLE' }
sum "otel.sdk.span.live" (unit={span}, aggTemp=cumulative): 0 { 'otel.span.sampling_result': 'RECORD_AND_SAMPLE' }
sum "otel.sdk.processor.span.queue.capacity" (unit={span}, aggTemp=cumulative): 2048 { 'otel.component.type': 'batching_span_processor', 'otel.component.name': 'batching_span_processor/0' }
sum "otel.sdk.processor.span.queue.size" (unit={span}, aggTemp=cumulative): 1 { 'otel.component.type': 'batching_span_processor', 'otel.component.name': 'batching_span_processor/0' }
------ trace 3dc16f (1 span) ------
       span a80ba5 "myspan" (0.1ms, SPAN_KIND_INTERNAL, service.name=unknown_service:node, scope=manual)

------ metrics ------
sum "otel.sdk.span.started" (unit={span}, aggTemp=cumulative): 1 { 'otel.span.parent.origin': 'none', 'otel.span.sampling_result': 'RECORD_AND_SAMPLE' }
sum "otel.sdk.span.live" (unit={span}, aggTemp=cumulative): 0 { 'otel.span.sampling_result': 'RECORD_AND_SAMPLE' }
sum "otel.sdk.processor.span.processed" (unit={span}, aggTemp=cumulative): 1 { 'otel.component.type': 'batching_span_processor', 'otel.component.name': 'batching_span_processor/0' }
sum "otel.sdk.processor.span.queue.capacity" (unit={span}, aggTemp=cumulative): 2048 { 'otel.component.type': 'batching_span_processor', 'otel.component.name': 'batching_span_processor/0' }
sum "otel.sdk.processor.span.queue.size" (unit={span}, aggTemp=cumulative): 0 { 'otel.component.type': 'batching_span_processor', 'otel.component.name': 'batching_span_processor/0' }

Which looks good to me.

  ])
  : getSpanProcessorsFromEnv(
      sdkMetricsEnabled ? this._meterProvider : undefined
    );
Contributor

nit: This ternary is getting a bit complex; it would be clearer as an if-block.
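
For illustration, the if-block version of this sdk.ts fragment could look roughly like the following (configuredSpanProcessors is a hypothetical stand-in for whatever the first branch of the quoted ternary produces):

let spanProcessors: SpanProcessor[];
if (configuredSpanProcessors) {
  // Span processors passed in explicitly through the SDK configuration.
  spanProcessors = configuredSpanProcessors;
} else {
  // Fall back to env-derived processors, threading the MeterProvider
  // through only when SDK self-metrics are enabled.
  spanProcessors = getSpanProcessorsFromEnv(
    sdkMetricsEnabled ? this._meterProvider : undefined
  );
}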

@trentm trentm requested a review from JacksonWeber April 2, 2026 20:30
: getSpanProcessorsFromEnv();
// While SDK metrics are unstable, we require an opt-in.
// https://opentelemetry.io/docs/specs/semconv/otel/sdk-metrics/
const sdkMetricsEnabled = getBooleanFromEnv(
Contributor

Ah, there is a merge issue here now, because #6433 went in.

Contributor Author

Yup, fixed it - it seems the code in the two PRs was just different enough that it got auto-merged unexpectedly 😂

